Results 1 - 20 of 25
1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20245409

ABSTRACT

With the outbreak of COVID-19, its prevention and treatment have gradually become the focus of public disease control, and patients are increasingly concerned about its symptoms. Because COVID-19 presents symptoms similar to the common cold, it cannot be diagnosed from symptoms alone, and medical images of the lungs must be examined to determine whether a patient is COVID-19 positive. As the number of patients with pneumonia-like symptoms grows, more and more lung images need to be acquired, while the number of physicians currently falls far short of patient demand, leaving patients unable to detect and understand their own conditions in time. To address this, we perform image augmentation and data cleaning on a dataset of COVID-19 lung medical images and design a deep learning classification network for accurate classification. Through a new fine-tuning method and hyperparameter tuning we designed, the network achieves 95.76% classification accuracy on this task, with higher accuracy and less training time than classic convolutional neural network models. © 2023 SPIE.
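As a rough illustration of the augment-and-fine-tune recipe sketched in this abstract, the following Python snippet fine-tunes an ImageNet-pretrained CNN on a two-class lung image folder; the backbone (ResNet-18), data layout, and hyperparameters are assumptions rather than the authors' actual configuration.

```python
# Minimal sketch: augment, then fine-tune a pretrained CNN for COVID-19 vs. non-COVID-19.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),          # simple image augmentation
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Expects an ImageFolder layout such as data/train/<covid|non_covid>/*.png (hypothetical path).
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # binary head: COVID-19 positive vs. negative

# Fine-tune the new head with a larger learning rate than the pretrained body.
optimizer = torch.optim.Adam([
    {"params": model.fc.parameters(), "lr": 1e-3},
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc.")], "lr": 1e-4},
])
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```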

2.
17th European Conference on Computer Vision, ECCV 2022 ; 13807 LNCS:500-516, 2023.
Article in English | Scopus | ID: covidwho-2266327

ABSTRACT

Since COVID-19 strongly affects the respiratory system, lung CT scans can be used to analyze a patient's health. We introduce a neural network for predicting the severity of lung damage and detecting a COVID-19 infection from three-dimensional CT data. To this end, we adapt the recent ConvNeXt model to process three-dimensional data. Furthermore, we design and analyze different pretraining methods specifically intended to improve the model's ability to handle three-dimensional CT data. We rank 2nd in the 1st COVID-19 Severity Detection Challenge and 3rd in the 2nd COVID-19 Detection Challenge. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

3.
World Wide Web ; : 1-18, 2022 Aug 27.
Article in English | MEDLINE | ID: covidwho-2242664

ABSTRACT

Medical reports have significant clinical value to radiologists and specialists, especially during a pandemic like COVID-19. However, beyond the common difficulties faced in natural image captioning, medical report generation specifically requires the model to describe a medical image with a fine-grained, semantically coherent paragraph that satisfies both medical common sense and logic. Previous works generally extract global image features and attempt to generate a paragraph similar to the referenced reports; however, this approach has two limitations. Firstly, the regions of primary interest to radiologists usually occupy a small area of the image, meaning that the remaining parts of the image can be considered irrelevant noise during training. Secondly, many similar sentences are used in each medical report to describe the normal regions of the image, which causes serious data bias; this bias is likely to teach models to generate these inessential sentences on a regular basis. To address these problems, we propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns. Specifically, auxiliary patches are explored to expand the widely used visual patch features before they are fed to the Transformer encoder, while external linguistic signals help the decoder better master prior knowledge during pre-training. Our approach performs well on common benchmarks, including CX-CHR, IU X-Ray, and the COVID-19 CT Report dataset (COV-CTR), demonstrating that combining auxiliary signals with the Transformer architecture brings significant improvements in medical report generation. The experimental results confirm that auxiliary-signal-driven Transformer-based models outperform previous approaches on both medical terminology classification and paragraph generation metrics.

4.
2022 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2022 ; : 2803-2807, 2022.
Article in English | Scopus | ID: covidwho-2237366

ABSTRACT

The outbreak of the COVID-19 pandemic has spread rapidly and severely affected all aspects of human life. Recent research has shown that artificial intelligence and deep learning based approaches achieve successful results in detecting diseases. How to detect COVID-19 accurately and quickly has always been a core research topic. In this paper, we propose a novel approach based on prompt learning for COVID-19 diagnosis. Departing from the traditional 'pre-training, fine-tuning' paradigm, our prompt-based method reformulates COVID-19 diagnosis as a masked prediction task. Specifically, we adopt an attention mechanism to learn a multi-modal representation of the medical image and text, and manually construct a cloze prompt template and a label word set. The pre-trained language model selects the label word with the maximum probability, and the prediction is then mapped to the disease category. Experimental results show that our proposed method obtains a clear improvement of 1.2% in Mi-F1 score compared with state-of-the-art methods. © 2022 IEEE.
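A minimal, text-only sketch of the cloze-prompt idea: a pre-trained masked language model fills a [MASK] slot in a manually constructed template, the prediction is restricted to a label word set, and the winning word is mapped to a disease category. The model name (bert-base-uncased), the template, and the label words are illustrative assumptions; the paper's attention-based image-text fusion is omitted.

```python
# Cloze-prompt sketch for the text modality only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Label-word set (assumed; each label word must be a single token in the vocabulary).
label_words = {"positive": "COVID-19", "negative": "non-COVID-19"}

report = "Bilateral ground-glass opacities are observed in both lungs."
prompt = f"{report} The COVID-19 test result is {tokenizer.mask_token}."

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos[0]]

# Restrict the prediction to the label-word vocabulary and pick the most probable one.
word_ids = {w: tokenizer.convert_tokens_to_ids(w) for w in label_words}
best = max(word_ids, key=lambda w: logits[word_ids[w]].item())
print(best, "->", label_words[best])
```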

5.
2022 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2022 ; : 2803-2807, 2022.
Article in English | Scopus | ID: covidwho-2223064

ABSTRACT

The outbreak of the COVID-19 pandemic has spread rapidly and severely affected all aspects of human life. Recent research has shown that artificial intelligence and deep learning based approaches achieve successful results in detecting diseases. How to detect COVID-19 accurately and quickly has always been a core research topic. In this paper, we propose a novel approach based on prompt learning for COVID-19 diagnosis. Departing from the traditional 'pre-training, fine-tuning' paradigm, our prompt-based method reformulates COVID-19 diagnosis as a masked prediction task. Specifically, we adopt an attention mechanism to learn a multi-modal representation of the medical image and text, and manually construct a cloze prompt template and a label word set. The pre-trained language model selects the label word with the maximum probability, and the prediction is then mapped to the disease category. Experimental results show that our proposed method obtains a clear improvement of 1.2% in Mi-F1 score compared with state-of-the-art methods. © 2022 IEEE.

6.
2022 IET International Conference on Engineering Technologies and Applications, IET-ICETA 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2191944

ABSTRACT

The COVID-19 outbreak has had a serious impact on Taiwan's health care system. Deep learning is an effective technology to help doctors make the most appropriate medical decisions for every patient in this crisis. In this study, we select four state-of-the-art deep learning models: Xception, MobileNetV2, DenseNet169, and DenseNet201. Transfer learning is used to pre-train them before the four models individually classify normal and COVID-19-positive chest X-ray images. The best results reach 98.58% accuracy, 98.58% precision, and 98.42% recall. © 2022 IEEE.

7.
2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) ; : 561-565, 2022.
Article in English | Web of Science | ID: covidwho-2191814

ABSTRACT

A rapid and accurate detection method for COVID-19 is important for containing its spread. In this work, we propose a bi-directional long short-term memory (BiLSTM) network based COVID-19 detection method using breath, speech, and cough signals. The three kinds of acoustic signals are used to train the network, and individual models are built for the three tasks; their parameters are averaged to obtain an average model, which is then used as the initialization for the BiLSTM model training of each task. This initialization method is shown to significantly improve detection performance on all three tasks; we call this supervised pre-training based detection. In addition, we take an existing pre-trained wav2vec 2.0 model, further pre-train it on the DiCOVA dataset, and use it to extract a high-level representation as the model input in place of conventional mel-frequency cepstral coefficient (MFCC) features; we call this self-supervised pre-training based detection. To reduce the information redundancy contained in the recorded sounds, silent segment removal, amplitude normalization, and time-frequency masking are also applied. The proposed detection model is evaluated on the DiCOVA dataset and achieves an area under the curve (AUC) score of 88.44% on the blind test in the fusion track, showing that using high-level features together with MFCC features helps improve diagnostic accuracy.
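The parameter-averaging initialization described here can be sketched as follows: per-task BiLSTM detectors are trained separately on breath, speech, and cough data, their state dictionaries are averaged element-wise, and the average is loaded back as the starting point for each task's training. Network sizes are illustrative assumptions.

```python
# Sketch of the supervised pre-training (parameter averaging) step.
import copy
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, n_feats=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, x):                  # x: (batch, time, n_feats)
        out, _ = self.lstm(x)
        return self.head(out.mean(dim=1))  # mean-pool over time, then classify

def average_state_dicts(state_dicts):
    """Element-wise average of parameter tensors across models with identical shapes."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Assume the breath/speech/cough models were already trained individually.
task_models = [BiLSTMDetector(), BiLSTMDetector(), BiLSTMDetector()]
avg_weights = average_state_dicts([m.state_dict() for m in task_models])

# Re-initialize each task model from the averaged weights before task-specific training.
for m in task_models:
    m.load_state_dict(avg_weights)
```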

8.
Jisuanji Gongcheng/Computer Engineering ; 48(3):17-22, 2022.
Article in Chinese | Scopus | ID: covidwho-2145859

ABSTRACT

The COVID-19 pandemic has had a serious impact on global society. Building a mathematical model to predict the number of confirmed cases helps provide a basis for public health decision-making. In a complex and changeable external environment, infectious disease prediction models based on deep learning have become a common research topic. However, existing models require large amounts of data and cannot adapt to data-scarce scenarios during supervised learning, which reduces prediction accuracy. In this study, the COVID-19 prediction model P-GRU, which combines a pre-training and fine-tuning strategy, is constructed. By adopting a pre-training strategy on a dataset from a specific region, the model is exposed to more epidemic data in advance, so it can learn the implicit evolution law of COVID-19, obtain more sufficient prior knowledge for prediction, and use a fixed-length sequence containing recent historical information to predict the future number of confirmed cases. During prediction, the impact of local restrictive policies on the epidemic trend is considered to achieve accurate prediction on the target-region dataset. The experimental results demonstrate that the pre-training strategy effectively improves prediction performance. Compared with Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) models, the P-GRU model attains excellent performance on the Mean Absolute Percentage Error (MAPE) and Root Mean Squared Error (RMSE) evaluation indexes and is more suitable for predicting the transmission trend of COVID-19. © 2022, Editorial Office of Computer Engineering. All rights reserved.
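A minimal sketch of the pre-train-then-fine-tune idea behind P-GRU, assuming a simple GRU forecaster over sliding windows of daily case counts; the window length, model size, and placeholder series are illustrative, and the paper's policy features are omitted.

```python
# Pre-train on a data-rich source region, then fine-tune on the scarce target region.
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window, 1) recent daily case counts
        _, h = self.gru(x)
        return self.out(h[-1])       # predict the next day's count

def fit(model, series, window=14, epochs=50, lr=1e-3):
    """Train on sliding windows of a 1-D case-count series (shape: [T])."""
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
    ys = series[window:].unsqueeze(-1)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs), ys)
        loss.backward()
        opt.step()

model = GRUForecaster()
source_series = torch.rand(300)      # placeholder for the data-rich source region
target_series = torch.rand(60)       # placeholder for the scarce target region
fit(model, source_series)            # pre-training
fit(model, target_series, lr=1e-4)   # fine-tuning with a smaller learning rate
```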

9.
Int J Environ Res Public Health ; 19(19)2022 Oct 02.
Article in English | MEDLINE | ID: covidwho-2066016

ABSTRACT

The occurrence of major health events can have a significant impact on public mood and mental health. In this study, we selected Shanghai during the COVID-19 pandemic as a case study and Weibo texts as the data source. The ERNIE pre-training model was used to classify the text data into five emotional categories: gratitude, confidence, sadness, anger, and no emotion. The changes in public sentiment and potential influencing factors were analyzed with the emotional sequence diagram method. We also examined the causal relationship between the epidemic and public sentiment, as well as between positive and negative emotions. The study found: (1) public sentiment during the epidemic was primarily affected by public behavior, government behavior, and the severity of the epidemic. (2) From the perspective of time-series changes, the changes in public emotions during the epidemic were divided into emotional fermentation, emotional climax, and emotional chaos periods. (3) There was a clear causal relationship between the epidemic and the changes in public emotions, and the impact on negative emotions was greater than that on positive emotions. Additionally, positive emotions had a certain inhibitory effect on negative emotions.


Subject(s)
COVID-19, Social Media, Attitude, COVID-19/epidemiology, China/epidemiology, Emergencies, Emotions, Humans, Pandemics
10.
28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022 ; : 3357-3365, 2022.
Article in English | Scopus | ID: covidwho-2020395

ABSTRACT

The outbreak of COVID-19 has spurred newly launched services on online platforms and, at the same time, buoyed multifarious online fraud activities. Due to rapid technological and commercial innovation that opens up an ever-expanding set of products, insufficient labeled data renders existing supervised or semi-supervised fraud detection models ineffective for these emerging services. However, the user behavioral data continually accumulated on online platforms can help improve fraud detection for newly launched services. To this end, in this paper we propose to pre-train user behavior sequences, which consist of orderly arranged actions, from large-scale unlabeled data sources for online fraud detection. Recent studies show that accurate extraction of user intentions (formed by consecutive actions) in behavioral sequences can improve the performance of online fraud detection. By analyzing the characteristics of online fraud activities, we devise a model named UB-PTM that learns knowledge of fraud activities through three agent tasks at different granularities, i.e., the action, intention, and sequence levels, from large-scale unlabeled data. Extensive experiments on three downstream transaction- and user-level online fraud detection tasks demonstrate that UB-PTM outperforms state-of-the-art models designed for specific tasks. © 2022 ACM.

11.
Lecture Notes on Data Engineering and Communications Technologies ; 148:406-415, 2022.
Article in English | Scopus | ID: covidwho-2013996

ABSTRACT

This paper applies self-supervised learning to diagnose coronavirus disease (COVID-19) among other pneumonia and normal cases based on chest Computed Tomography (CT) images. Because medical imaging in real-world scenarios lacks well-verified and explicitly labeled datasets, which is a major challenge for supervised learning, we utilize the Momentum Contrast v2 (MoCo v2) algorithm to pre-train our proposed Self-Supervised Medical Imaging Network (SSL-MedImNet), achieving remarkable generalization from substantial unlabeled data. The proposed model achieves competitive and promising performance on COVIDx CT-2, a well-known, high-quality dataset for COVID-19 assessment, and its pre-trained representations transfer well to the diagnosis task. Moreover, SSL-MedImNet approximately matches its supervised counterparts COVID-Net CT-1 and COVID-Net CT-2, differing only by small margins. In particular, with only a few additional dense layers, the proposed model achieves approximately 88.3% COVID-19 accuracy and 98.4% specificity, and competitive results for normal and pneumonia cases. The results advocate the potential of self-supervised learning to achieve highly generalized understanding from unlabeled medical images and then transfer it to relevant supervised tasks in real scenarios. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
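The transfer step described above, a self-supervised pre-trained encoder plus a few additional dense layers, can be sketched as follows; the checkpoint path, head sizes, and the ResNet-50 backbone are illustrative assumptions, and the MoCo v2 pre-training itself is not shown.

```python
# Freeze a (MoCo v2) pre-trained encoder and train only a small classification head.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet50(weights=None)
# state = torch.load("mocov2_pretrained.pth")  # hypothetical checkpoint from MoCo v2 pre-training
# encoder.load_state_dict(state, strict=False)
encoder.fc = nn.Identity()                     # expose the 2048-d representation

for p in encoder.parameters():                 # keep the self-supervised features fixed
    p.requires_grad = False

head = nn.Sequential(
    nn.Linear(2048, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 3),                         # COVID-19 / pneumonia / normal
)

model = nn.Sequential(encoder, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
```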

12.
26th Annual Conference on Medical Image Understanding and Analysis, MIUA 2022 ; 13413 LNCS:234-250, 2022.
Article in English | Scopus | ID: covidwho-2013942

ABSTRACT

Quick and accurate diagnosis is of paramount importance to mitigate the effects of COVID-19 infection, particularly for severe cases. Enormous effort has been put towards developing deep learning methods to classify and detect COVID-19 infections from chest radiography images. However, questions have recently been raised surrounding the clinical viability and effectiveness of such methods. In this work, we investigate the impact of multi-task learning (classification and segmentation) on the ability of CNNs to differentiate between various appearances of COVID-19 infections in the lung. We also employ self-supervised pre-training approaches, namely MoCo and inpainting-CXR, to eliminate the dependence on expensive ground truth annotations for COVID-19 classification. Finally, we conduct a critical evaluation of the models to assess their deployment readiness and provide insights into the difficulties of fine-grained COVID-19 multi-class classification from chest X-rays. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
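A minimal sketch of the multi-task setup investigated here: a shared encoder feeds both a classification head and a segmentation decoder, and the two losses are summed with a weight. The toy architecture and the 0.5 weight are assumptions, not the authors' network.

```python
# Multi-task (classification + segmentation) sketch with a shared encoder.
import torch
import torch.nn as nn

class MultiTaskCXR(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                      # toy encoder; a ResNet would be typical
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))
        self.seg_head = nn.Sequential(                     # upsample back to input resolution
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.cls_head(feats), self.seg_head(feats)

model = MultiTaskCXR()
x = torch.rand(4, 1, 256, 256)                 # batch of chest X-rays
y_cls = torch.randint(0, 3, (4,))              # class labels
y_seg = torch.rand(4, 1, 256, 256).round()     # binary lung masks

logits, masks = model(x)
loss = nn.functional.cross_entropy(logits, y_cls) \
     + 0.5 * nn.functional.binary_cross_entropy_with_logits(masks, y_seg)   # weighted sum of tasks
loss.backward()
```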

13.
ACM BCB ; 2022, 2022 Aug.
Article in English | MEDLINE | ID: covidwho-1993099

ABSTRACT

Clinical EHR data are naturally heterogeneous, containing abundant sub-phenotypes. Such diversity creates challenges for outcome prediction with machine learning models because it leads to high intra-class variance. To address this issue, we propose a supervised pre-training model with a unique embedded k-nearest-neighbor positive sampling strategy. We demonstrate the value of this framework theoretically and show that it yields highly competitive experimental results in predicting patient mortality on real-world COVID-19 EHR data covering over 7,000 patients admitted to a large, urban health system. Our method achieves an AUROC of 0.872, outperforming alternative pre-training models and traditional machine learning methods. Additionally, our method performs much better when the training data size is small (345 training instances).
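The embedded k-nearest-neighbor positive sampling strategy can be roughly sketched as below: for each patient embedding, positives are drawn from its nearest neighbors that share the outcome label, which could then feed a supervised contrastive objective. Dimensions, k, and the random data are placeholders, not the paper's exact pipeline.

```python
# k-NN positive sampling within the same outcome label.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))        # patient embeddings (e.g., encoded EHR features)
y = rng.integers(0, 2, size=1000)      # outcome label (e.g., mortality)

def knn_positive_indices(X, y, k=5):
    """For each sample, return indices of same-label samples among its k nearest neighbors."""
    nn_index = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neighbors = nn_index.kneighbors(X)          # column 0 is the sample itself
    positives = []
    for i, row in enumerate(neighbors):
        same_label = [j for j in row[1:] if y[j] == y[i]]
        positives.append(same_label)
    return positives

pos = knn_positive_indices(X, y)
print(len(pos[0]), "same-label neighbors found for the first patient")
```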

14.
2nd International Conference on Intelligent Systems and Pattern Recognition, ISPR 2022 ; 1589 CCIS:78-89, 2022.
Article in English | Scopus | ID: covidwho-1930342

ABSTRACT

Most existing computer vision applications rely on models trained on supervised corpora, which is at odds with the explosion of massive unlabeled data sets. In the field of medical imaging, for example, creating labels is extremely time-consuming because professionals must spend countless hours looking at images to manually annotate, segment, etc. Recently, several works have sought solutions to the challenge of learning effective visual representations without human supervision. In this work, we investigate the potential of self-supervised learning as a pretraining phase for improving the classification of radiographic images when the amount of available annotated data is small. To do so, we pretrain a deep encoder with contrastive learning on a chest X-ray dataset using no labels at all, and then fine-tune it using only a few labeled data samples. We experimentally demonstrate that unsupervised pretraining on unlabeled data can learn useful representations from chest X-ray images, and that only a few labeled samples are sufficient to reach the same accuracy as a supervised model trained on the whole annotated dataset. © 2022, Springer Nature Switzerland AG.
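For reference, a generic contrastive pre-training objective of the kind used in such pipelines, an NT-Xent/InfoNCE-style loss over two augmented views of each unlabeled image, might look as follows; the temperature and batch handling are assumptions and this is not the paper's exact framework.

```python
# Generic NT-Xent (normalized temperature-scaled cross-entropy) loss sketch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) projections of two augmented views of the same images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2B, dim)
    sim = z @ z.t() / temperature                               # cosine similarities
    n = z.size(0)
    sim.fill_diagonal_(float("-inf"))                           # a view is not its own positive
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```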

15.
Medical Imaging 2022: Computer-Aided Diagnosis ; 12033, 2022.
Article in English | Scopus | ID: covidwho-1923076

ABSTRACT

Automated analysis of chest imaging in coronavirus disease (COVID-19) has mostly been performed on smaller datasets, leading to overfitting and poor generalizability. Training deep neural networks on large datasets requires data labels, which are not always available and can be expensive to obtain. Self-supervision is increasingly used in various medical imaging tasks to leverage large amounts of unlabeled data during pretraining. Our proposed approach pretrains a vision transformer to perform two self-supervision tasks, image reconstruction and contrastive learning, on a chest X-ray (CXR) dataset, generating more robust image embeddings in the process. The reconstruction module models visual semantics within the lung fields by reconstructing the input image through a mechanism that mimics denoising and autoencoding. The contrastive learning module, in turn, learns the concept of similarity between two texture representations. After pretraining, the vision transformer is used as a feature extractor for a clinical outcome prediction task on our target dataset. The pretraining multi-Kaggle dataset comprises 27,499 CXR scans, while our target dataset contains 530 images. Specifically, our framework predicts ventilation and mortality outcomes for COVID-19-positive patients using baseline CXR. We compare our method against a baseline approach using pretrained ResNet50 features. Experimental results demonstrate that our proposed approach outperforms the supervised method. © 2022 SPIE.

16.
2nd International Conference on Digital Futures and Transformative Technologies, ICoDT2 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1922689

ABSTRACT

In transfer learning, a model is pre-trained on a large unsupervised dataset and then fine-tuned on domain-specific downstream tasks. BERT is the first truly deep bidirectional language model; it reads the input from both directions to better understand the context of a sentence, relying solely on the attention mechanism. This study presents a Twitter Modified BERT (TM-BERT) based on the Transformer architecture. It also develops a new COVID-19 Vaccination Sentiment Analysis Task (CV-SAT) and a COVID-19 unsupervised pre-training dataset containing 70K tweets. BERT achieved 0.70 and 0.76 accuracy when fine-tuned on CV-SAT, whereas TM-BERT achieved 0.89, a 19% and 13% improvement over BERT. Another enhancement is time efficiency: BERT takes 64 hours of pre-training while TM-BERT takes only 17 hours, and it still yields the 19% improvement even after being pre-trained on four times less data. © 2022 IEEE.
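Continued masked-language-model pre-training on a domain corpus, the general recipe behind TM-BERT's domain adaptation, can be sketched with the Hugging Face Trainer as below; the tweet file path, hyperparameters, and the bert-base-uncased starting point are illustrative assumptions, and the paper's architectural modifications are not reproduced.

```python
# Domain-adaptive masked-language-model pre-training on a tweet corpus.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# covid_tweets.txt: one tweet per line (hypothetical local file).
ds = load_dataset("text", data_files={"train": "covid_tweets.txt"})["train"]
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128), batched=True)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tm-bert-pretrain", num_train_epochs=1,
                           per_device_train_batch_size=32),
    train_dataset=ds,
    data_collator=collator,
)
trainer.train()
```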

17.
2nd International Conference on Electronics, Communications and Information Technology, CECIT 2021 ; : 245-249, 2021.
Article in English | Scopus | ID: covidwho-1831726

ABSTRACT

An analysis model for epidemic-related speech based on a BiLSTM and MCNN structure is proposed in order to track news and information about COVID-19 in a timely manner, along with citizens' views on and concerns about the situation. The BERT pre-training model is used to extract word vectors, and then information from the bidirectional long short-term memory network and convolutional neural network models at different levels is fused. Finally, the speech is classified into two categories according to whether its sentiment is positive or negative. Experimental results show that this model classifies the polarity of speech sentiment better than previous word vector models and traditional neural network models. © 2021 IEEE.
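A rough sketch of the described fusion, assuming frozen BERT token embeddings feeding a BiLSTM branch and a multi-kernel CNN branch whose features are concatenated for binary sentiment classification; layer sizes, kernel widths, and the bert-base-chinese checkpoint are illustrative assumptions.

```python
# BERT word vectors -> BiLSTM branch + multi-kernel CNN branch -> binary sentiment.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BiLSTMMCNN(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", hidden=128, kernels=(2, 3, 4)):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():       # use BERT purely as a word-vector extractor
            p.requires_grad = False
        dim = self.bert.config.hidden_size
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.convs = nn.ModuleList(nn.Conv1d(dim, 64, k) for k in kernels)
        self.fc = nn.Linear(2 * hidden + 64 * len(kernels), 2)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_feat = self.lstm(emb)[0][:, -1]   # last time-step output (simplified; ignores padding)
        conv_feat = torch.cat([c(emb.transpose(1, 2)).amax(dim=2) for c in self.convs], dim=1)
        return self.fc(torch.cat([lstm_feat, conv_feat], dim=1))

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
batch = tok(["疫情防控取得了积极进展"], return_tensors="pt", padding=True)
logits = BiLSTMMCNN()(batch["input_ids"], batch["attention_mask"])
```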

18.
2nd IEEE International Conference on Power, Electronics and Computer Applications, ICPECA 2022 ; : 1179-1183, 2022.
Article in English | Scopus | ID: covidwho-1788729

ABSTRACT

This experiment analyzed 100,000 epidemic-related microblogs officially provided by the CCF. Using Enhanced Representation through Knowledge Integration (ERNIE), the pre-training model's ability to extract Chinese semantic information was improved. The deep pyramid convolutional neural network (DPCNN) was then merged with ERNIE to save computing costs and enhance feature extraction for long-distance text. This model was the most effective in a comparison test of six emotional three-category tasks, improving the accuracy of the BERT pre-training model by 7%. © 2022 IEEE.

19.
2021 International Conference on Signal Processing and Machine Learning, CONF-SPML 2021 ; : 186-189, 2021.
Article in English | Scopus | ID: covidwho-1769548

ABSTRACT

The prevalence of COVID-19 has highlighted the need for practical digital education tools over the past year. With students studying from home, teachers have struggled to provide their students with adequately challenging coursework. Our project aims to solve this issue in the context of math. More specifically, our goal is to encourage thoughtful learning by supplying students with personalized two-number addition problems that take time to solve but that the student is still expected to answer correctly. Our solution is to model the process of selecting a math problem for a student as a Markov Decision Process (MDP) and then use Q-learning to determine the best policy for arriving at the most optimally challenging two-number addition problem for that student. The project creates three student simulators based on group member data. We show that it took student one $(162 \pm 134)$ iterations (mean $\pm$ standard deviation) to receive appropriately leveled problems. Student two took $(230 \pm 205)$ iterations, and student three took $(247 \pm 236)$ iterations. Lastly, we demonstrate that pre-training our model on students two and three and testing on student one yields a significant improvement, from $(162 \pm 134)$ iterations to $(35 \pm 44)$ iterations. © 2021 IEEE.
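The Q-learning loop described here can be sketched in tabular form as follows; the state, action, and reward definitions and the student simulator are simplified assumptions, not the authors' exact MDP.

```python
# Toy tabular Q-learning for selecting the next problem's difficulty.
import random
from collections import defaultdict

actions = [1, 2, 3]                      # e.g., number of digits in the next addition problem
Q = defaultdict(float)                   # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def simulate_student(difficulty):
    """Hypothetical student simulator: reward is high when the problem is challenging yet solved."""
    solved = random.random() < max(0.2, 1.0 - 0.3 * difficulty)
    return (difficulty if solved else -1), difficulty   # (reward, next state)

state = 1
for step in range(10_000):
    if random.random() < epsilon:                                 # epsilon-greedy exploration
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    reward, next_state = simulate_student(action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```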

20.
Joint 10th International Conference on Informatics, Electronics and Vision, ICIEV 2021 and 2021 5th International Conference on Imaging, Vision and Pattern Recognition, icIVPR 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1752398

ABSTRACT

During this pandemic, chest X-rays may play a vital role in the diagnosis of COVID-19. The shortage of labeled medical images makes this diagnosis more challenging. We established an efficient transfer learning method for classifying COVID-19 chest X-rays, gathering images from publicly available chest X-ray datasets. We built an effective classifier for our pre-trained model with the state-of-the-art Mish activation function, batch normalization, and a dropout layer. Our classifier efficiently distinguishes COVID-19, pneumonia, and normal cases by differentiating inflammation in the lungs. Furthermore, we applied the recent state-of-the-art idea of semi-supervised Noisy Student training to our EfficientNet architecture model and compared it with other benchmark models. Our proposed model performs well on benchmark evaluation metrics (accuracy, F1 score, and ROC AUC), with an overall ROC AUC score of 98%. We then visually interpreted the trained model with a saliency map to make it more understandable. Contribution: we contribute an improved three-class classifier using the new state-of-the-art Mish activation function in the EfficientNet transfer learning model and improve the accuracy of COVID-19 classification through semi-supervised Noisy Student training. © 2021 IEEE
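The improved classifier head (Mish activation, batch normalization, and dropout on top of an EfficientNet backbone) can be sketched as below; layer sizes are illustrative and the Noisy Student training loop is not shown.

```python
# Replace EfficientNet's default classifier with a Mish/BatchNorm/Dropout head for 3 classes.
import torch.nn as nn
from torchvision import models

backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
in_features = backbone.classifier[1].in_features      # 1280 for EfficientNet-B0

backbone.classifier = nn.Sequential(
    nn.Linear(in_features, 256),
    nn.BatchNorm1d(256),
    nn.Mish(),                                        # Mish activation in the new head
    nn.Dropout(0.4),
    nn.Linear(256, 3),                                # COVID-19 / pneumonia / normal
)
model = backbone
```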
